
Can the US find new partners in West Africa after Niger exit?

Al Jazeera

Following 11 years of defence cooperation and millions of dollars spent on maintaining military bases, the United States officially pulled its troops out of Niger this week in a surprise divorce that experts are calling a "blow" to Washington's ambitions for influence in the troubled Sahel region of West Africa. Once-close relations between the two countries saw the US establish large, expensive military bases from which it launched surveillance drones in Niger to monitor myriad armed groups linked to al-Qaeda and ISIL (ISIS). However, those ties collapsed in March when Niger's military government, which seized power in July 2023, cancelled a decade-long security agreement and told the US, which was pushing for a transition to civilian rule, to remove its 1,100 military personnel stationed there by September 15. For months, the US neither fully aligned with nor outright opposed the ruling military, analysts say. On the one hand, Washington seemed ready to maintain defence relations with the new ruling power; on the other, it felt compelled to denounce the coup and pause aid to Niger.


Leveraging tropical reef, bird and unrelated sounds for superior transfer learning in marine bioacoustics

Williams, Ben, van Merriënboer, Bart, Dumoulin, Vincent, Hamer, Jenny, Triantafillou, Eleni, Fleishman, Abram B., McKown, Matthew, Munger, Jill E., Rice, Aaron N., Lillis, Ashlee, White, Clemency E., Hobbs, Catherine A. D., Razak, Tries B., Jones, Kate E., Denton, Tom

arXiv.org Artificial Intelligence

Machine learning has the potential to revolutionize passive acoustic monitoring (PAM) for ecological assessments. However, high annotation and compute costs limit the field's efficacy. Generalizable pretrained networks can overcome these costs, but high-quality pretraining requires vast annotated libraries, limiting its current applicability primarily to bird taxa. Here, we identify the optimum pretraining strategy for a data-deficient domain using coral reef bioacoustics. We assemble ReefSet, a large annotated library of reef sounds, though modest compared to bird libraries at 2% of the sample count. Through testing few-shot transfer learning performance, we observe that pretraining on bird audio provides notably superior generalizability compared to pretraining on ReefSet or unrelated audio alone. However, our key finding is that cross-domain mixing, which leverages bird, reef and unrelated audio during pretraining, maximizes reef generalizability. SurfPerch, our pretrained network, provides a strong foundation for automated analysis of marine PAM data with minimal annotation and compute costs.
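The few-shot transfer setting the abstract describes can be illustrated with one common probe: average a handful of labelled embeddings into per-class prototypes, then assign each query clip to the nearest prototype. The labels and 3-D embedding vectors below are invented toy values; a real pretrained network such as SurfPerch would supply much higher-dimensional embeddings, and this nearest-centroid probe is just one possible evaluation, not necessarily the paper's exact protocol.

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length embedding vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(query, centroids):
    """Return the label whose class centroid is closest (Euclidean) to the query."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(query, centroids[label]))

# Hypothetical few-shot "support set": a couple of labelled reef-clip embeddings
# per class (invented 3-D values for the sketch).
support = {
    "snapping_shrimp": [[0.9, 0.1, 0.0], [1.1, -0.1, 0.1]],
    "fish_chorus":     [[0.0, 1.0, 0.2], [0.1, 0.9, 0.0]],
}
centroids = {label: centroid(vecs) for label, vecs in support.items()}
print(nearest_centroid([1.0, 0.0, 0.05], centroids))  # closest to snapping_shrimp
```

Because only the tiny support set is labelled, the quality of the frozen embeddings — which is what the pretraining-mix experiments vary — dominates classification accuracy.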


BART-SIMP: a novel framework for flexible spatial covariate modeling and prediction using Bayesian additive regression trees

Jiang, Alex Ziyu, Wakefield, Jon

arXiv.org Machine Learning

Prediction is a classic challenge in spatial statistics and the inclusion of spatial covariates can greatly improve predictive performance when incorporated into a model with latent spatial effects. It is desirable to develop flexible regression models that allow for nonlinearities and interactions in the covariate structure. Machine learning models have been suggested in the spatial context, allowing for spatial dependence in the residuals, but fail to provide reliable uncertainty estimates. In this paper, we investigate a novel combination of a Gaussian process spatial model and a Bayesian Additive Regression Tree (BART) model. The computational burden of the approach is reduced by combining Markov chain Monte Carlo (MCMC) with the Integrated Nested Laplace Approximation (INLA) technique. We study the performance of the method via simulations and use the model to predict anthropometric responses, collected via household cluster samples in Kenya.
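The additive structure sketched in the abstract — a flexible mean function plus a latent spatial effect — can be illustrated with a toy simple-kriging step applied to the residuals left after the mean is subtracted. The exponential covariance, the two site coordinates, and the residual values below are invented for illustration; they are not the paper's actual model, priors, or Kenyan survey data.

```python
import math

def exp_cov(d, sigma2=1.0, phi=1.0):
    """Exponential spatial covariance: sigma^2 * exp(-distance / range)."""
    return sigma2 * math.exp(-d / phi)

def simple_krige(sites, resid, s0, sigma2=1.0, phi=1.0):
    """Simple-kriging prediction of a zero-mean spatial residual at s0
    from exactly two observed sites (2x2 system solved by Cramer's rule)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Covariance among observed sites, and between each site and the target.
    C = [[exp_cov(dist(si, sj), sigma2, phi) for sj in sites] for si in sites]
    c0 = [exp_cov(dist(si, s0), sigma2, phi) for si in sites]
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    w0 = (c0[0] * C[1][1] - c0[1] * C[0][1]) / det
    w1 = (C[0][0] * c0[1] - C[1][0] * c0[0]) / det
    return w0 * resid[0] + w1 * resid[1]

# Toy residuals at two sites after subtracting a fitted (e.g. BART) mean.
sites = [(0.0, 0.0), (1.0, 0.0)]
resid = [0.8, 0.4]
print(simple_krige(sites, resid, (0.5, 0.0)))  # smoothed value between 0.4 and 0.8
```

The full model replaces the fixed covariance parameters with posterior inference (here via MCMC combined with INLA) and the subtracted mean with the BART sum-of-trees fit, but the division of labour — nonlinear covariate effects in the mean, spatial dependence in the residual — is the same.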


Hyperbolic Image-Text Representations

Desai, Karan, Nickel, Maximilian, Rajpurohit, Tanmay, Johnson, Justin, Vedantam, Ramakrishna

arXiv.org Artificial Intelligence

Visual and linguistic concepts naturally organize themselves in a hierarchy, where a textual concept "dog" entails all images that contain dogs. Despite being intuitive, current large-scale vision and language models such as CLIP do not explicitly capture such hierarchy. We propose MERU, a contrastive model that yields hyperbolic representations of images and text. Hyperbolic spaces have suitable geometric properties to embed tree-like data, so MERU can better capture the underlying hierarchy in image-text datasets. Our results show that MERU learns a highly interpretable and structured representation space while being competitive with CLIP's performance on standard multi-modal tasks like image classification and image-text retrieval.
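A minimal sketch of the hyperbolic machinery such a model builds on, assuming the Lorentz (hyperboloid) model with curvature -1: a Euclidean tangent vector is lifted onto the hyperboloid with the exponential map at the origin, and geodesic distance uses the Lorentzian inner product. The specific vectors below are arbitrary; MERU's actual architecture, dimensionality, and curvature handling are in the paper.

```python
import math

def exp_map_origin(v):
    """Lift a tangent vector v onto the Lorentz-model hyperboloid
    via the exponential map at the origin (1, 0, ..., 0)."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0:
        return [1.0] + [0.0] * len(v)
    s = math.sinh(norm) / norm
    return [math.cosh(norm)] + [s * x for x in v]

def lorentz_distance(x, y):
    """Geodesic distance on the hyperboloid: acosh(-<x, y>_L),
    where <x, y>_L = -x0*y0 + sum_i xi*yi."""
    inner = -x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))
    return math.acosh(max(-inner, 1.0))  # clamp guards against round-off

a = exp_map_origin([0.5, 0.0])   # 0.5 from the origin, one direction
b = exp_map_origin([-0.5, 0.0])  # 0.5 from the origin, opposite direction
print(lorentz_distance(a, b))    # geodesic through the origin, length 1.0
```

The appeal for hierarchies is that volume grows exponentially with radius in hyperbolic space, so generic concepts can sit near the origin with room for their many specific entailments farther out — exactly the tree-like structure a "dog"-entails-dog-images hierarchy needs.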


PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation

Liu, Ao, Dong, Haoyu, Okazaki, Naoaki, Han, Shi, Zhang, Dongmei

arXiv.org Artificial Intelligence

Logical table-to-text generation is a task that involves generating logically faithful sentences from tables, which requires models to derive logical-level facts from table records via logical inference. This poses a new challenge for the logical-level content planning of table-to-text models. However, directly learning logical inference knowledge from table-text pairs is very difficult for neural models because of the ambiguity of natural language and the scarcity of parallel data. Hence, even large-scale pretrained language models exhibit low logical fidelity on logical table-to-text generation. In this work, we propose PLOG (Pretrained Logical Form Generator), a framework to improve generation fidelity. Specifically, PLOG is first pretrained on a table-to-logic-form generation (table-to-logic) task, then finetuned on downstream table-to-text tasks. The formal definition of logical forms enables us to collect a large number of accurate logical forms from tables without human annotation. In addition, PLOG can learn logical inference from table-logic pairs far more reliably than from table-text pairs. To evaluate our model, we further collect a controlled logical table-to-text dataset, CONTLOG, based on an existing dataset. On two benchmarks, LOGICNLG and CONTLOG, PLOG outperforms strong baselines by a large margin in logical fidelity, demonstrating the effectiveness of table-to-logic pretraining.
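A toy illustration of why logical forms are easier to supervise than free text: because they are formal, they can be executed against the table and checked exactly, with no natural-language ambiguity. The mini-language below (operators `all_rows`, `filter_eq`, `count`, `max`) is invented for this sketch and is not PLOG's actual logical-form grammar.

```python
def execute(lf, rows):
    """Evaluate a tiny, illustrative logical-form language against table rows."""
    op = lf["op"]
    if op == "all_rows":           # the whole table
        return rows
    if op == "filter_eq":          # keep rows whose column equals a value
        return [r for r in execute(lf["arg"], rows) if r[lf["col"]] == lf["value"]]
    if op == "count":              # number of rows in a sub-expression
        return len(execute(lf["arg"], rows))
    if op == "max":                # maximum of a numeric column
        return max(r[lf["col"]] for r in execute(lf["arg"], rows))
    raise ValueError(f"unknown op: {op}")

table = [
    {"team": "A", "wins": 10},
    {"team": "B", "wins": 7},
    {"team": "C", "wins": 10},
]
# Logical form behind a sentence like "Two teams tied for the most wins, 10."
lf = {"op": "count", "arg": {"op": "filter_eq", "col": "wins", "value": 10,
                             "arg": {"op": "all_rows"}}}
print(execute(lf, table))  # 2
```

Because every candidate form either executes to the right value or does not, large sets of (table, logical form) training pairs can be harvested and verified automatically — the property the abstract credits for PLOG's annotation-free pretraining data.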


Deep Generative Models for Reject Inference in Credit Scoring

Mancisidor, Rogelio A., Kampffmeyer, Michael, Aas, Kjersti, Jenssen, Robert

arXiv.org Machine Learning

Credit scoring models based only on accepted applications may be biased, and this bias can have statistical and economic consequences. Reject inference is the process of attempting to infer the creditworthiness of rejected applications. In this research, we use deep generative models to develop two new semi-supervised Bayesian models for reject inference in credit scoring, in which we model the data-generating process as dependent on a Gaussian mixture. The goal is to improve the classification accuracy of credit scoring models by adding rejected applications. Our proposed models infer the unknown creditworthiness of rejected applications by exact enumeration of the two possible outcomes of the loan (default or non-default). The efficient stochastic gradient optimization techniques used in deep generative models make our models suitable for large data sets. Finally, the experiments in this research show that our proposed models perform better than classical and alternative machine learning models for reject inference in credit scoring.
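The "exact enumeration of the two possible outcomes" can be illustrated with a toy Bayes step: score a rejected applicant's features under both the default and non-default class-conditional densities, weight by the prior, and normalize. The 1-D Gaussian class-conditionals and the prior below are invented stand-ins; the paper uses deep generative models (and a Gaussian-mixture latent structure) for these densities, not hand-set Gaussians.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of a univariate normal distribution."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_default(x, prior_default=0.2, mu=(0.0, 2.0), sigma=(1.0, 1.0)):
    """Posterior probability of default for feature value x, by enumerating
    both outcomes: p(default | x) ∝ p(x | default) * p(default)."""
    like_good = gauss_pdf(x, mu[0], sigma[0]) * (1 - prior_default)
    like_bad = gauss_pdf(x, mu[1], sigma[1]) * prior_default
    return like_bad / (like_good + like_bad)

# A rejected applicant whose feature sits near the "default" mode gets a
# high posterior default probability; one near the "good" mode gets a low one.
print(posterior_default(2.0), posterior_default(0.0))
```

Because the unknown label takes only two values, this marginalization is exact rather than sampled — which is what lets the semi-supervised objective fold unlabelled (rejected) applications into training without ever observing their repayment outcome.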